

A Biologically Plausible Neural Network for Slow Feature Analysis

Neural Information Processing Systems

Learning latent features from time series data is an important problem in both machine learning and brain function. One approach, called Slow Feature Analysis (SFA), leverages the slowness of many salient features relative to the rapidly varying input signals. Furthermore, when trained on naturalistic stimuli, SFA reproduces interesting properties of cells in the primary visual cortex and hippocampus, suggesting that the brain uses temporal slowness as a computational principle for learning latent features. However, despite the potential relevance of SFA for modeling brain function, there is currently no SFA algorithm with a biologically plausible neural network implementation, by which we mean an algorithm that operates in the online setting and can be mapped onto a neural network with local synaptic updates. In this work, starting from an SFA objective, we derive an SFA algorithm, called Bio-SFA, with a biologically plausible neural network implementation.
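The slowness principle described in this abstract can be illustrated with a rough batch linear-SFA sketch. Note this is the classic offline formulation (whiten, then take the directions of smallest derivative variance), not the paper's online Bio-SFA algorithm; the two-channel signal mixture and feature count below are made-up illustrative choices:

```python
import numpy as np

def linear_sfa(X, k=1):
    """Batch linear SFA: return k directions whose projections vary slowest in time.

    X: (T, d) time series. Offline sketch of the slowness objective only.
    """
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    W_white = evecs / np.sqrt(evals)       # whitening: each column scaled by 1/sqrt(eigenvalue)
    Z = X @ W_white                        # whitened signal, identity covariance
    dZ = np.diff(Z, axis=0)                # discrete time derivative
    dcov = dZ.T @ dZ / len(dZ)
    _, d_evecs = np.linalg.eigh(dcov)
    # eigh returns eigenvalues in ascending order, so the first k columns
    # are the directions of smallest derivative variance, i.e. the slowest features
    return W_white @ d_evecs[:, :k]

# demo: recover a slowly varying latent mixed with a rapidly varying one
t = np.linspace(0, 20 * np.pi, 5000)
slow = np.sin(0.05 * t)                    # slowly varying latent
fast = np.sin(5.0 * t)                     # rapidly varying nuisance
mix = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])

w = linear_sfa(mix, k=1)
recovered = mix @ w[:, 0]
corr = abs(np.corrcoef(recovered, slow)[0, 1])   # close to 1: slow latent recovered
```

After whitening, every unit-norm projection has unit variance, so minimizing the variance of the projection's time derivative directly selects the slowest direction, which here recovers the slow sinusoid from the mixture.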


Review for NeurIPS paper: A Biologically Plausible Neural Network for Slow Feature Analysis

Neural Information Processing Systems

Summary and Contributions: This paper produces a so-called biologically plausible neural network for slow-feature analysis. Biological plausibility here means that network learning is online and based on local synaptic learning rules. These online and locality requirements might lead to low computational overhead. While Foldiak, Wiskott, and many others have explored online local learning for SFA over the last thirty years, this paper attempts to relate SFA to a normative theory through a multidimensional scaling (MDS) objective.


Review for NeurIPS paper: A Biologically Plausible Neural Network for Slow Feature Analysis

Neural Information Processing Systems

All reviewers agreed that the proposed normative approach to deriving a biologically plausible implementation of SFA in multiple dimensions represents a strong contribution.



Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks

Sengupta, Anirvan, Pehlevan, Cengiz, Tepper, Mariano, Genkin, Alexander, Chklovskii, Dmitri

Neural Information Processing Systems

Many neurons in the brain, such as place cells in the rodent hippocampus, have localized receptive fields, i.e., they respond to a small neighborhood of stimulus space. What is the functional significance of such representations and how can they arise? Here, we propose that localized receptive fields emerge in similarity-preserving networks of rectifying neurons that learn low-dimensional manifolds populated by sensory inputs. Numerical simulations of such networks on standard datasets yield manifold-tiling localized receptive fields. More generally, we show analytically that, for data lying on symmetric manifolds, optimal solutions of objectives, from which similarity-preserving networks are derived, have localized receptive fields. Therefore, nonnegative similarity-preserving mapping (NSM) implemented by neural networks can model representations of continuous manifolds in the brain.
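The nonnegative similarity-preserving objective behind these networks can be sketched offline with projected gradient descent. This toy script is not the paper's neural-network dynamics, only the underlying Frobenius-norm objective optimized directly; the ring manifold, neuron count, step size, and iteration budget are all arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# inputs sampled from a 1-D manifold (a ring) embedded in 2-D
T = 60
theta = np.linspace(0.0, 2.0 * np.pi, T, endpoint=False)
X = np.stack([np.cos(theta), np.sin(theta)])          # (2, T) input data
Gx = X.T @ X                                          # input similarity matrix

k = 10                                                # output neurons
Y = 0.1 * rng.random((k, T))                          # small nonnegative init

def loss(Y):
    # nonnegative similarity matching: match output similarities Y^T Y to Gx
    return np.sum((Gx - Y.T @ Y) ** 2)

loss0 = loss(Y)
lr = 1e-3
for _ in range(10000):
    grad = -4.0 * Y @ (Gx - Y.T @ Y)                  # gradient of the Frobenius objective
    Y = np.maximum(Y - lr * grad, 0.0)                # projected step enforces Y >= 0
loss1 = loss(Y)                                       # substantially below loss0
```

Because Y^T Y is entrywise nonnegative, it cannot reproduce the negative similarities between distant points on the ring; in practice each output row then tends to become a localized bump of activity over a neighborhood of the manifold, which is the manifold-tiling effect the paper analyzes.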